On the Independencies Hidden in the Structure of a Probabilistic Logic Program

Rückschloß, Kilian, Weitkämper, Felix

arXiv.org Artificial Intelligence

Pearl and Verma developed d-separation as a widely used graphical criterion to reason about the conditional independencies that are implied by the causal structure of a Bayesian network. As acyclic ground probabilistic logic programs correspond to Bayesian networks on their dependency graph, we can compute conditional independencies from d-separation in the latter. In the present paper, we generalize the reasoning above to the non-ground case. First, we abstract the notion of a probabilistic logic program away from external databases and probabilities to obtain so-called program structures. We then present a correct meta-interpreter that decides whether a certain conditional independence statement is implied by a program structure on a given external database. Finally, we give a fragment of program structures for which we obtain a completeness statement of our conditional independence oracle. We close with an experimental evaluation of our approach revealing that our meta-interpreter performs significantly faster than checking the definition of independence using exact inference in ProbLog 2.
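The graphical criterion the abstract builds on can be checked directly. Below is a minimal Python sketch of d-separation via the ancestral moral graph, a standard test equivalent to Pearl and Verma's path-based definition — it is not the paper's meta-interpreter, and the adjacency-dict representation of the DAG is an assumption of this sketch:

```python
from itertools import combinations

def ancestors(dag, nodes):
    """All nodes with a directed path into `nodes`, plus `nodes` themselves."""
    result, stack = set(nodes), list(nodes)
    parents = {v: {u for u in dag if v in dag[u]} for v in dag}
    while stack:
        for p in parents[stack.pop()]:
            if p not in result:
                result.add(p)
                stack.append(p)
    return result

def d_separated(dag, xs, ys, zs):
    """dag: {node: set of children}. True iff xs and ys are d-separated by zs."""
    keep = ancestors(dag, set(xs) | set(ys) | set(zs))
    # Moralize the ancestral subgraph: undirected edges between every
    # parent-child pair and between co-parents of a common child.
    adj = {v: set() for v in keep}
    for u in keep:
        for v in dag[u] & keep:
            adj[u].add(v); adj[v].add(u)
    for v in keep:
        ps = [u for u in keep if v in dag[u]]
        for a, b in combinations(ps, 2):
            adj[a].add(b); adj[b].add(a)
    # d-separation holds iff zs separates xs from ys in the moral graph.
    seen, stack = set(xs), list(xs)
    while stack:
        u = stack.pop()
        if u in ys:
            return False
        for w in adj[u] - set(zs):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return True
```

On the classic collider a -> c <- b, `d_separated` reports independence of a and b marginally, but dependence once c is observed.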


dPASP: A Comprehensive Differentiable Probabilistic Answer Set Programming Environment For Neurosymbolic Learning and Reasoning

Geh, Renato Lui, Gonçalves, Jonas, Silveira, Igor Cataneo, Mauá, Denis Deratani, Cozman, Fabio Gagliardi

arXiv.org Artificial Intelligence

We present dPASP, a novel declarative probabilistic logic programming framework for differentiable neuro-symbolic reasoning. The framework allows for the specification of discrete probabilistic models with neural predicates, logic constraints and interval-valued probabilistic choices, thus supporting models that combine low-level perception (images, texts, etc.), common-sense reasoning, and (vague) statistical knowledge. To support all such features, we discuss several semantics for probabilistic logic programs that can express nondeterministic, contradictory, incomplete and/or statistical knowledge. We also discuss how gradient-based learning can be performed with neural predicates and probabilistic choices under selected semantics. We then describe an implemented package that supports inference and learning in the language, along with several example programs. The package requires minimal user knowledge of a deep learning system's inner workings, while allowing end-to-end training of rather sophisticated models and loss functions.
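To illustrate the kind of gradient-based learning of probabilistic choices described above, here is a toy sketch — not dPASP's API; the two-rule program, its single shared parameter, and the Bernoulli loss are all invented for the example. It fits the probability of a probabilistic fact by gradient descent on a negative log-likelihood:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Hypothetical tiny program:  q :- f1.   q :- f2.
# f1 and f2 are independent probabilistic facts sharing one learnable
# parameter p = sigmoid(theta), so P(q) = 1 - (1 - p)^2.
def learn_p(observed_freq, steps=5000, lr=0.5):
    theta = 0.0
    for _ in range(steps):
        p = sigmoid(theta)
        pq = 1.0 - (1.0 - p) ** 2
        # Chain rule on the Bernoulli NLL of the query frequency:
        # dL/dpq * dpq/dp * dp/dtheta
        dL_dpq = -(observed_freq / pq - (1.0 - observed_freq) / (1.0 - pq))
        dpq_dp = 2.0 * (1.0 - p)
        dp_dtheta = p * (1.0 - p)
        theta -= lr * dL_dpq * dpq_dp * dp_dtheta
    return sigmoid(theta)
```

For instance, if q is observed true 75% of the time, the learned p converges to 0.5, since 1 - (1 - 0.5)^2 = 0.75.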


Buchman

AAAI Conferences

Probabilistic logic programs without negation can have cycles (with a preference for false), but cannot represent all conditional distributions. Probabilistic logic programs with negation can represent arbitrary conditional probabilities, but with cycles they create logical inconsistencies. We show how allowing negative noise probabilities lets us represent arbitrary conditional probabilities without negations. Noise probabilities for non-exclusive rules are difficult to interpret and unintuitive to manipulate; to alleviate this we define "probability-strengths", which provide an intuitive additive algebra for combining rules. For acyclic programs we prove which constraints on the strengths allow for proper distributions on the non-noise variables and allow all non-extreme distributions to be represented. We show how arbitrary CPDs can be converted into this form in a canonical way. Furthermore, if a joint distribution can be compactly represented by a cyclic program with negations, we show how it can also be compactly represented with negative noise probabilities and no negations. This allows algorithms for exact inference that do not support negations to be applied to probabilistic logic programs with negations.
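One way to picture such an additive algebra of strengths — the particular encoding s = -log(1 - p) below is an illustrative assumption, not necessarily the paper's definition — is that noisy-or combination of rules becomes addition of strengths, and a negative noise probability becomes a negative strength that can pull the combined probability down, which plain noisy-or cannot do:

```python
import math

def strength(p):
    """Map a (possibly negative) noise probability to an additive strength.
    Hypothetical encoding: s = -log(1 - p); p < 0 gives s < 0."""
    return -math.log(1.0 - p)

def combine(ps):
    """Noisy-or combination 1 - prod(1 - p_i), computed by adding strengths."""
    total = sum(strength(p) for p in ps)
    return 1.0 - math.exp(-total)
```

Here combine([0.5, 0.5]) reproduces the usual noisy-or value 0.75, while adding a rule with noise probability -1.0 lowers combine([0.75, -1.0]) to 0.5 — an inhibitory effect expressible without negation.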


Syntactic Requirements for Well-defined Hybrid Probabilistic Logic Programs

Azzolini, Damiano, Riguzzi, Fabrizio

arXiv.org Artificial Intelligence

The power and expressivity of Probabilistic Logic Programming (PLP) [8, 18] have been used to represent many real-world situations [2, 9, 14]. Usually, probabilistic logic programs involve only discrete random variables with Bernoulli or categorical distributions. Numerous solutions have emerged to also handle continuous distributions [10, 12, 25], increasing the expressiveness of PLP and giving birth to hybrid probabilistic logic programs, that is, programs that include both discrete and continuous random variables. Inference in this type of program is hard, since it combines the complexity of the grounding computation with the intractability of a distribution defined by a mixture of random variables. Usually, inference in general hybrid probabilistic logic programs (i.e., without restrictions on the types of distributions allowed) is performed by leveraging knowledge compilation and external solvers [25] or by sampling [4, 16].
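A toy illustration of sampling-based inference in a hybrid program — the program and its parameters are invented for this example, and real systems use estimators far more sophisticated than plain rejection sampling:

```python
import random

random.seed(0)

# Hypothetical hybrid program:
#   0.3::hot.
#   temp ~ gaussian(30, 3) :- hot.
#   temp ~ gaussian(18, 3) :- \+ hot.
# Query: P(hot | temp > 25), estimated by rejection sampling.
def estimate(n=200_000):
    kept = hot_count = 0
    for _ in range(n):
        hot = random.random() < 0.3
        temp = random.gauss(30 if hot else 18, 3)
        if temp > 25:          # evidence: keep only consistent worlds
            kept += 1
            hot_count += hot
    return hot_count / kept
```

The exact posterior here is about 0.977; the estimator approaches it as the sample count grows, but rejection becomes hopeless once the evidence has small (or, for point-valued continuous evidence, zero) probability, which is why the cited systems turn to knowledge compilation or smarter samplers.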


An asymptotic analysis of probabilistic logic programming with implications for expressing projective families of distributions

Weitkämper, Felix

arXiv.org Artificial Intelligence

Over the last few years, there has been increasing research on the scaling behaviour of statistical relational representations with the size of the domain, and on the connections between domain size dependence and lifted inference. In particular, the asymptotic behaviour of statistical relational representations has come under scrutiny, and projectivity was isolated as the strongest form of domain size independence. In this contribution, we show that every probabilistic logic program under the distribution semantics is asymptotically equivalent to a probabilistic logic program consisting only of range-restricted clauses over probabilistic facts. To facilitate the application of classical results from finite model theory, we introduce the abstract distribution semantics, defined as an arbitrary logical theory over probabilistic facts, to bridge the gap to the distribution semantics underlying probabilistic logic programming. In this representation, range-restricted logic programs correspond to quantifier-free theories, making asymptotic quantifier results available for use. We conclude that every probabilistic logic program inducing a projective family of distributions is in fact captured by this class, and we infer interesting consequences for the expressivity of probabilistic logic programs as well as for the asymptotic behaviour of probabilistic rules.
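The domain-size dependence at issue can be made concrete with a closed-form toy case — the one-fact program 0.1::edge(X,Y) below is an invented example, not taken from the paper. An existential query's probability drifts with the domain size, while the marginal of a fixed ground atom does not, which is exactly the projectivity distinction:

```python
# Hypothetical program: 0.1::edge(X,Y). over a domain of size n.
# P(some edge exists) depends on n and tends to 1 as n grows,
# whereas the marginal P(edge(a, b)) is the same for every n:
# the hallmark of a projective family of distributions.
def p_some_edge(p, n):
    return 1.0 - (1.0 - p) ** (n * n)

def p_fixed_edge(p, n):
    return p
```

Already at n = 10 the existential probability exceeds 0.9999, while the single-atom marginal sits at 0.1 for every domain size.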


Generating Random Logic Programs Using Constraint Programming

Dilkas, Paulius, Belle, Vaishak

arXiv.org Artificial Intelligence

Testing algorithms across a wide range of problem instances is crucial to ensure the validity of any claim about one algorithm's superiority over another. However, when it comes to inference algorithms for probabilistic logic programs, experimental evaluations are limited to only a few programs. Existing methods to generate random logic programs are limited to propositional programs and often impose stringent syntactic restrictions. We present a novel approach to generating random logic programs and random probabilistic logic programs using constraint programming, introducing a new constraint to control the independence structure of the underlying probability distribution. We also provide a combinatorial argument for the correctness of the model, show how the model scales with parameter values, and use the model to compare probabilistic inference algorithms across a range of synthetic problems. Our model allows inference algorithm developers to evaluate and compare the algorithms across a wide range of instances, providing a detailed picture of their (comparative) strengths and weaknesses.
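The flavour of constrained random program generation can be sketched naively — this is not the paper's constraint-programming model, and ordering the atoms is just one cheap way to enforce one structural constraint (acyclicity); the hypothetical tuple encoding (probability, head, body) is this sketch's own:

```python
import random

random.seed(1)

# Naive sketch: sample an acyclic propositional probabilistic logic
# program by fixing an order on the atoms and letting each rule body
# mention only strictly earlier atoms.
def random_program(n_atoms=5, max_body=2):
    atoms = [f"a{i}" for i in range(n_atoms)]
    program = []
    for i, head in enumerate(atoms):
        if i == 0 or random.random() < 0.4:
            # probabilistic fact with a random annotation
            program.append((round(random.uniform(0.05, 0.95), 2), head, ()))
        else:
            body = tuple(random.sample(atoms[:i], min(max_body, i)))
            program.append((1.0, head, body))
    return program
```

A constraint-programming model, as in the paper, can impose much richer requirements simultaneously (e.g. on the independence structure of the induced distribution), which ad hoc samplers like this cannot.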


Learning Probabilistic Logic Programs in Continuous Domains

Speichert, Stefanie, Belle, Vaishak

arXiv.org Artificial Intelligence

The field of statistical relational learning aims at unifying logic and probability to reason and learn from data. Perhaps the most successful paradigm in the field is probabilistic logic programming: the enabling of stochastic primitives in logic programming, which is now increasingly seen to provide a declarative background to complex machine learning applications. While many systems offer inference capabilities, the more significant challenge is that of learning meaningful and interpretable symbolic representations from data. In that regard, inductive logic programming and related techniques have paved much of the way for the last few decades. Unfortunately, a major limitation of this exciting landscape is that much of the work is limited to finite-domain discrete probability distributions. Recently, a handful of systems have been extended to represent and perform inference with continuous distributions. The problem, of course, is that classical solutions for inference are either restricted to well-known parametric families (e.g., Gaussians) or resort to sampling strategies that provide correct answers only in the limit. When it comes to learning, moreover, inducing representations remains entirely open, other than "data-fitting" solutions that force-fit points to aforementioned parametric families. In this paper, we take the first steps towards inducing probabilistic logic programs for continuous and mixed discrete-continuous data, without being pigeon-holed to a fixed set of distribution families. Our key insight is to leverage techniques from piecewise polynomial function approximation theory, yielding a principled way to learn and compositionally construct density functions. We test the framework and discuss the learned representations.
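The piecewise idea can be illustrated in its simplest form, degree-0 pieces — the paper learns higher-degree polynomial pieces in a principled way, so this histogram-style sketch is only the base case of the construction:

```python
# Fit a piecewise-constant (degree-0 polynomial) density to samples on
# [lo, hi): one constant piece per bin, normalized so the pieces
# integrate to 1. Higher-degree pieces would be fit per bin instead.
def piecewise_density(samples, lo, hi, n_bins):
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in samples:
        if lo <= x < hi:
            counts[min(int((x - lo) / width), n_bins - 1)] += 1
    total = sum(counts)
    return [c / (total * width) for c in counts]
```

Because each piece is a polynomial, the resulting densities stay closed under the products and integrals that inference needs, which is the structural point the paper exploits.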



Stable Model Counting and Its Application in Probabilistic Logic Programming

Aziz, Rehan Abdul (The University of Melbourne) | Chu, Geoffrey (The University of Melbourne) | Muise, Christian (The University of Melbourne) | Stuckey, Peter James (The University of Melbourne)

AAAI Conferences

Model counting is the problem of computing the number of models that satisfy a given propositional theory. It has recently been applied to solving inference tasks in probabilistic logic programming, where the goal is to compute the probability of given queries being true provided a set of mutually independent random variables, a model (a logic program) and some evidence. The core of solving this inference task involves translating the logic program to a propositional theory and using a model counter. In this paper, we show that for some problems that involve inductive definitions like reachability in a graph, the translation of logic programs to SAT can be expensive for the purpose of solving inference tasks. For such problems, direct implementation of stable model semantics allows for more efficient solving. We present two implementation techniques, based on unfounded set detection, that extend a propositional model counter to a stable model counter. Our experiments show that for particular problems, our approach can outperform a state-of-the-art probabilistic logic programming solver by several orders of magnitude in terms of running time and space requirements, and can solve instances of significantly larger sizes on which the current solver runs out of time or memory.
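The object being counted can be pinned down with a brute-force reference implementation — naive enumeration over all interpretations with the Gelfond-Lifschitz reduct; the paper's contribution is instead doing this efficiently inside a propositional model counter via unfounded-set detection. The (head, positive-body, negative-body) rule encoding is this sketch's own:

```python
from itertools import product

# Brute-force stable model enumeration for a ground normal program.
# A rule is a triple (head, pos_body, neg_body) of atom names.
def stable_models(atoms, rules):
    models = []
    for bits in product([False, True], repeat=len(atoms)):
        I = {a for a, b in zip(atoms, bits) if b}
        # Gelfond-Lifschitz reduct: drop rules whose negative body
        # intersects I, then delete the negative literals.
        reduct = [(h, pos) for h, pos, neg in rules if not (set(neg) & I)]
        # Least model of the positive reduct by fixpoint iteration.
        lm, changed = set(), True
        while changed:
            changed = False
            for h, pos in reduct:
                if h not in lm and set(pos) <= lm:
                    lm.add(h)
                    changed = True
        if lm == I:            # I is stable iff it equals that least model
            models.append(I)
    return models
```

For the program { a :- not b.  b :- not a. } this yields the two stable models {a} and {b}, while the self-supporting rule a :- a. yields only the empty model — the unfounded-set behaviour that a plain SAT-based translation must pay to capture.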